\input acphdr % Section 4.7
\runninglefthead{ARITHMETIC} % chapter title
\titlepage\setcount00
\null
\vfill
\tenpoint
\ctrline{SECTION 4.7 of THE ART OF COMPUTER PROGRAMMING}
\ctrline{$\copyright$ 1978 Addison--Wesley Publishing Company, Inc.}
\vfill
\runningrighthead{MANIPULATION OF POWER SERIES}
\section{4.7}
\eject
\setcount0 486
%folio 635 galley 5b (C) Addison-Wesley 1978	*
\sectionbegin{\star4.7. MANIPULATION OF POWER SERIES}
I{\:cF WE ARE GIVEN} two power series
$$U(z) = U↓0 + U↓1z + U↓2z↑2 +\cdotss,\qquad V(z) = V↓0 + V↓1z
+ V↓2z↑2 +\cdotss,\eqno (1)$$
whose coefficients belong to a field, we can form
their sum, their product, their quotient, etc., to obtain new
power series. A polynomial is obviously a special case of a
power series, namely one in which only finitely many of the coefficients are nonzero.

Of course, only a finite number of terms can be
represented and stored within a computer, so it makes sense
to ask whether power series arithmetic is even possible on a computer
and, if it is, what makes it different from polynomial
arithmetic. The answer is that we work with only the first $N$
coefficients of the power series, where $N$ is a parameter that
may in principle be arbitrarily large; instead of ordinary polynomial
arithmetic, we are essentially doing polynomial arithmetic modulo
$z↑N$, and this often leads to a somewhat different point of
view. Furthermore, special operations like ``reversion'' can
be performed on power series but not on polynomials, since polynomials
are not closed under these operations.

Manipulation of power series has several applications
to numerical analysis, but perhaps its greatest use is the determination
of asymptotic expansions (as we have seen in Section 1.2.11.3),
or the calculation of quantities defined by certain generating
functions. The latter applications make it desirable to calculate
the coefficients exactly, instead of with floating-point arithmetic.
All of the algorithms in this section, with obvious exceptions,
can be done using rational operations only, so the techniques
of Section 4.5.1 can be used to obtain exact results when desired.

The calculation of $W(z) = U(z) \pm V(z)$ is, of
course, trivial, since we have $W↓n = U↓n \pm V↓n$ for $n =
0$, 1, 2, $\ldotss\,$. It is also easy to calculate $W(z) = U(z)V(z)$,
using the familiar ``Cauchy product rule'':
$$\chop to 9pt{W↓n = \sum ↓{0≤k≤n} U↓kV↓{n-k} = U↓0V↓n + U↓1V↓{n-1} +\cdots
+ U↓nV↓0.}\eqno (2)$$
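
For instance, rules like (2) might be programmed as follows (a minimal
sketch in Python, assuming that a series is represented simply by the list
of its first $N$ coefficients; the function names here are illustrative only):

    def series_sum(U, V):
        """W = U + V: just add the coefficients termwise."""
        return [u + v for u, v in zip(U, V)]

    def series_product(U, V):
        """W = U*V mod z**N, by the Cauchy product rule (2)."""
        N = min(len(U), len(V))
        return [sum(U[k] * V[n - k] for k in range(n + 1)) for n in range(N)]

    # Example: 1/(1-z) times itself is 1 + 2z + 3z^2 + ... ;
    # the first five coefficients of the product are [1, 2, 3, 4, 5].
    print(series_product([1, 1, 1, 1, 1], [1, 1, 1, 1, 1]))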

The quotient $W(z) = U(z)/V(z)$, when $V↓0
≠ 0$, can be obtained by interchanging $U$ and $W$ in (2); we
obtain the rule
$$\eqalignno{W↓n ⊗= \bigglp U↓n - \sum ↓{0≤k<n} W↓kV↓{n-k}\biggrp\vcenter{
\hjust{\:@\char'36}}V↓0\cr
⊗= (U↓n - W↓0V↓n - W↓1V↓{n-1} -\cdots - W↓{n-1}V↓1)/V↓0.⊗(3)\cr}$$
This recurrence relation for the $W$'s makes it easy
to determine $W↓0$, $W↓1$, $W↓2$, $\ldots$ successively, without inputting
$U↓n$ and $V↓n$ until after $W↓{n-1}$ has been computed. Let
us say that a power-series manipulation algorithm with the latter
property is ``on-line''; an on-line algorithm can be used to
determine $N$ coefficients $W↓0$, $W↓1$, $\ldotss$, $W↓{N-1}$ of the
result without knowing $N$ in advance, so it is possible in
theory to run the algorithm indefinitely and compute the entire
power series; or to run it until a certain condition is met.\xskip
(The opposite of ``on-line'' is ``off-line.'')

If the coefficients $U↓k$ and $V↓k$ are integers
but the $W↓k$ are not, recurrence (3) involves computation with
fractions. This can be avoided by the all-integer approach described
in exercise 2.
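
Recurrence (3) translates directly into a short program; here is a minimal
sketch in Python (exact rational arithmetic with the standard Fraction class
stands in for the techniques of Section 4.5.1, and the all-integer method of
exercise 2 is not shown). The sketch is on-line in the sense just defined:
only $U↓0$, $\ldotss$, $U↓n$ and $V↓0$, $\ldotss$, $V↓n$ are examined before
$W↓n$ is produced.

    from fractions import Fraction

    def series_quotient(U, V):
        """W = U/V mod z**N by recurrence (3); requires V[0] != 0."""
        assert V[0] != 0
        N = min(len(U), len(V))
        W = []
        for n in range(N):
            # W[n] = (U[n] - W[0]*V[n] - ... - W[n-1]*V[1]) / V[0]
            acc = Fraction(U[n]) - sum(W[k] * V[n - k] for k in range(n))
            W.append(acc / V[0])
        return W

    # Example: 1/(1 - z - z^2) = 1 + z + 2z^2 + 3z^3 + 5z^4 + 8z^5 + ...,
    # so the result is the Fibonacci numbers as exact Fractions.
    print(series_quotient([1, 0, 0, 0, 0, 0], [1, -1, -1, 0, 0, 0]))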

Let us now consider the operation of computing
$W(z) = V(z)↑α$, where $α$ is an ``arbitrary'' power. For
example, we could calculate the square root of $V(z)$ by taking
$α = {1\over 2}$, or we could find $V(z)↑{-10}$ or even $V(z)↑π$.
If $V↓m$ is the first nonzero coefficient of $V(z)$, we have
$$\baselineskip15pt\eqalign{V(z)⊗= V↓{m\,}z↑m\biglp 1 + (V↓{m+1}/V↓m)z
+ (V↓{m+2}/V↓m)z↑2 +\cdotss\bigrp ,\cr
V(z)↑α ⊗= V↑{α}↓{\!m\,}z↑{αm}\biglp 1 + (V↓{m+1}/V↓m)z + (V↓{m+2}/V↓m)z↑2
+\cdotss\bigrp ↑α.\cr}\eqno(4)$$
This will be a power series if and only if $αm$ is a
nonnegative integer. From (4) we can see that the problem of
computing general powers can be reduced to the case that $V↓0
= 1$; then the problem is to find coefficients of
$$W(z) = (1 + V↓1z + V↓2z↑2 + V↓3z↑3 +\cdotss)↑α.\eqno(5)$$
Clearly $W↓0 = 1↑α = 1$.
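\xskip (For example, the condition that $αm$ be a nonnegative integer holds for
$V(z) = 4z↑2 + 4z↑3 + z↑4 = (2z + z↑2)↑2$ with $α = {1\over2}$, since $m = 2$
and $αm = 1$; indeed, $V(z)↑{1/2} = 2z + z↑2$. But $V(z) = z$ with
$α = {1\over2}$ would call for $z↑{1/2}$, which is not a power series.)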

The obvious way to find the coefficients of (5)
is to use the binomial theorem (Eq.\ 1.2.9--19), or (if $α$ is
a positive integer) to try repeated squaring as in Section 4.6.3;
but a much simpler and more efficient device for calculating
powers has been suggested by J. C. P. Miller.\xskip [See P. Henrici,
{\sl JACM \bf 3} (1956), 10--15.]\xskip If $W(z) = V(z)↑α$, we have
by differentiation
$$W↓1 + 2W↓2z + 3W↓3z↑2 +\cdots = W↑\prime (z) =
αV(z)↑{α-1}V↑\prime(z);\eqno (6)$$
therefore
$$W↑\prime(z)V(z) = αW(z)V↑\prime(z).\eqno (7)$$
If we now equate the coefficients of $z↑{n-1}$
in (7), we find that
$$\sum ↓{0≤k≤n}kW↓kV↓{n-k} = α\sum ↓{0≤k≤n} (n - k)W↓kV↓{n-k},\eqno
(8)$$
and since $V↓0 = 1$, this gives us a useful computational rule valid for all $n≥1$:
$$\eqalignno{W↓n ⊗= \sum ↓{1≤k≤n} \left(\left(α + 1\over n\right)\,k
- 1\right)\,V↓kW↓{n-k}\cr
⊗= \biglp (α{+}1{-}n)V↓1W↓{n-1} + (2α{+}2{-}n)V↓2W↓{n-2}
+\cdots + nαV↓n\bigrp /n.⊗(9)\cr}$$
This equation leads to a simple on-line algorithm by which we can successively
determine $W↓1$, $W↓2$, $\ldotss$, using approximately $2n$ multiplications to
compute the $n$th coefficient. Note the special case $α=-1$, in which (9)
becomes the special case $U(z)=V↓0=1$ of (3).
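
A minimal Python sketch of an algorithm based on (9) follows (illustrative
only; it assumes the normalization $V↓0 = 1$ of (5), and it uses exact
rational arithmetic so that a value like $α = {1\over2}$ yields exact
coefficients):

    from fractions import Fraction

    def series_power(V, alpha):
        """W = V**alpha mod z**N by recurrence (9); assumes V[0] == 1."""
        assert V[0] == 1
        N = len(V)
        W = [Fraction(1)]                      # W[0] = 1**alpha = 1
        for n in range(1, N):
            s = sum(((alpha + 1) * k - n) * V[k] * W[n - k] for k in range(1, n + 1))
            W.append(s / n)
        return W

    # Example: (1 + z)**(1/2) = 1 + z/2 - z^2/8 + z^3/16 - 5z^4/128 + ... ;
    # with alpha = -1 the same routine computes 1/V(z), as noted above.
    print(series_power([1, 1, 0, 0, 0], Fraction(1, 2)))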

A similar technique can be used to form $f\biglp V(z)\bigrp$ when $f$ is any
function that satisfies a simple differential equation.\xskip(For example, see
exercise 4.)\xskip A comparatively straightforward ``power-series method'' is
often used to obtain the solution of differential equations; this technique is
explained in nearly all textbooks about differential equations.
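
One of the simplest such cases is the exponential function: if $W(z) = e↑{V(z)}$,
then $W↑\prime(z) = V↑\prime(z)W(z)$, and equating coefficients of $z↑{n-1}$ as
in (8) gives
$$W↓n = {1\over n}\sum ↓{1≤k≤n} kV↓kW↓{n-k},\qquad n≥1,$$
with $W↓0 = e↑{V↓0}$; thus the coefficients of $e↑{V(z)}$ can also be determined
on-line.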